Current Issue: October-December 2023, Issue 4 (5 Articles)
Recent advances in deep learning have shown great potential for the automatic generation of medical imaging reports. Deep learning techniques, inspired by image captioning, have made significant progress in diagnostic report generation. This paper provides a comprehensive overview of recent research on deep learning-based medical imaging report generation and proposes future directions for the field. First, we summarize and analyze the datasets, architectures, applications, and evaluation methods used in deep learning-based medical imaging report generation. Specifically, we survey the deep learning architectures used in diagnostic report generation, including hierarchical RNN-based, attention-based, and reinforcement learning-based frameworks. In addition, we identify potential challenges and suggest future research directions to support clinical applications and decision-making with medical imaging report generation systems.
Introduction. Recent advancements in technology have propelled the applications of artificial intelligence (AI) in various sectors, including healthcare. Medical imaging has benefited from AI through algorithms that reduce radiation risks in examinations, referral protocols, and scan justification. This research assessed the knowledge and awareness of 225 second- to fourth-year medical imaging students from public universities in Ghana regarding AI and its prospects in medical imaging. Methods. This was a cross-sectional quantitative study that used a closed-ended questionnaire with dichotomous questions, designed on Google Forms and distributed to students through their class WhatsApp platforms. Responses were entered into an Excel spreadsheet and analyzed with the Statistical Package for the Social Sciences (SPSS) software version 25.0 and Microsoft Excel 2016. Results. The response rate was 80.44% (181/225); of the respondents, 97 (53.6%) were male, 82 (45.3%) were female, and 2 (1.1%) preferred not to disclose their gender. Among these, 133 (73.5%) knew that AI had been incorporated into current imaging modalities, and 143 (79.0%) were aware of AI's emergence in medical imaging. However, only 97 (53.6%) were aware of the gradual emergence of AI in the radiography industry in Ghana. Furthermore, 160 (88.4%) expressed an interest in learning more about AI and its applications in medical imaging. Less than one-third (32%) knew about the general basic application of AI in patient positioning and protocol selection, and nearly two-thirds (65%) either felt threatened or were unsure about their job security due to the incorporation of AI technology in medical imaging equipment. Less than half of the participants acknowledged that current clinical internships helped them appreciate the role of AI in medical imaging (38%) or increased their level of knowledge of AI (43%). Discussion.
Generally, the findings indicate that medical imaging students had a fair general knowledge of AI and its prospects in medical imaging but lacked in-depth knowledge. They also lacked the requisite awareness of AI's emergence in radiography practice in Ghana and were unfamiliar with some basic applications of AI in modern imaging equipment. Additionally, they held some misconceptions about the role AI plays in the radiographer's job. Conclusion. Decision-makers should implement educational policies that integrate AI education into the current medical imaging curriculum to prepare students for the future. Students should also gain practical exposure to the various AI technologies incorporated into current medical imaging equipment.
Nowadays, brain tumors have become a leading cause of mortality worldwide. Tumor cells grow abnormally and adversely affect the surrounding brain tissue. These tumors can be either cancerous or non-cancerous, and their symptoms vary depending on their location, size, and type. Because of their complex and varied structure, accurately detecting and classifying brain tumors at an early stage, when mortality can still be reduced, is challenging. This research proposes an improved fine-tuned CNN model based on ResNet50 and U-Net to address this problem. The model is trained on the publicly available TCGA-LGG dataset from TCIA, which comprises 120 patients. The proposed CNN and fine-tuned ResNet50 model classify images as tumor or non-tumor, and the U-Net model is integrated to segment the tumor regions. Performance is evaluated with accuracy, intersection over union (IoU), Dice similarity coefficient (DSC), and similarity index (SI). The fine-tuned ResNet50 model achieves IoU: 0.91, DSC: 0.95, and SI: 0.95, while U-Net with ResNet50 outperforms all other models in correctly classifying and segmenting the tumor region.
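The segmentation metrics named in the abstract above (IoU and DSC) have simple closed forms over binary masks. The following is a minimal sketch of how they are typically computed; the function names and the NumPy-based implementation are illustrative assumptions, not the paper's code:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union between two boolean segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks are a perfect match.
    return float(inter / union) if union else 1.0

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return float(2 * inter / total) if total else 1.0
```

For example, a predicted mask that covers the ground-truth region plus one extra pixel of equal size yields IoU 0.5 but Dice 2/3, illustrating that Dice weighs the overlap more generously than IoU.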
Purpose: To provide a summary of research advances in ocular image-based artificial intelligence for systemic diseases. Methods: Narrative literature review. Results: Ocular image-based artificial intelligence has been applied to a variety of systemic diseases, including endocrine, cardiovascular, neurological, renal, autoimmune, and hematological diseases, among many others. However, the studies are still at an early stage. The majority have used AI only for disease diagnosis, and the specific mechanisms linking systemic diseases to ocular images remain unclear. In addition, the research faces many limitations, such as the number of available images, the interpretability of artificial intelligence, rare diseases, and ethical and legal issues. Conclusion: While ocular image-based artificial intelligence is widely used, the relationship between the eye and the whole body should be more clearly elucidated.
Deep learning-based medical image analysis technology has developed to the point that its accuracy surpasses that of a human radiologist in some tasks. However, labeling medical images requires human experts and considerable time and expense. Moreover, medical image data usually have an imbalanced distribution across diseases. In particular, in multilabel classification, learning with a small amount of labeled data causes overfitting: the model easily overfits the limited labeled data while underfitting the large amount of unlabeled data. In this study, we propose a method that combines entropy-based Mixup and self-training to improve the performance of data-imbalanced chest X-ray classification. The proposed method applies the Mixup algorithm to the limited labeled data to alleviate the data imbalance problem and performs self-training that effectively utilizes the unlabeled data, iterating this process by replacing the teacher model with the student model. Experimental results in an environment with a small amount of labeled data and a large amount of unlabeled data show that combining entropy-based Mixup and self-training improves classification performance.
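The Mixup step that the abstract above builds on can be sketched in a few lines. This shows only the standard Mixup operation (a convex combination of two samples and their labels drawn from a Beta distribution); the paper's entropy-based weighting and the self-training teacher-student loop are not reproduced here, and the function name and `alpha` default are illustrative assumptions:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Standard Mixup: blend two training samples and their labels.

    x1, x2 : input arrays of the same shape (e.g. images)
    y1, y2 : label vectors of the same shape (e.g. one-hot or multilabel)
    alpha  : Beta distribution parameter controlling interpolation strength
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y
```

In a self-training setting of the kind the abstract describes, a teacher model would pseudo-label the unlabeled pool, the student would train on the mixed labeled data plus pseudo-labeled data, and the student would then replace the teacher for the next iteration.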